
    Attention to the model's face when learning from video modeling examples in adolescents with and without autism spectrum disorder

    We investigated the effects of seeing the instructor's (i.e., the model's) face in video modeling examples on students' attention and their learning outcomes. Research with university students suggested that the model's face attracts students' attention away from what the model is doing, but this did not hamper learning. We aimed to investigate whether we would replicate this finding in adolescents (prevocational education) and to establish how adolescents with autism spectrum disorder, who have been found to look less at faces generally, would process video examples in which the model's face is visible. Results showed that typically developing adolescents who did see the model's face paid significantly less attention to the task area than typically developing adolescents who did not see the model's face. Adolescents with autism spectrum disorder paid less attention to the model's face and more to the task demonstration area than typically developing adolescents who saw the model's face. These differences in viewing behavior, however, did not affect learning outcomes. This study provides further evidence that seeing the model's face in video examples affects students' attention but not their learning outcomes.

    Task Experience as a Boundary Condition for the Negative Effects of Irrelevant Information on Learning

    Research on multimedia learning has shown that learning is hampered when a multimedia message includes extraneous information.

    On the relation between action selection and movement control in 5- to 9-month-old infants

    Although 5-month-old infants select action modes that are adaptive to the size of the object (i.e., one- or two-handed reaching), it has largely remained unclear whether infants of this age control the ensuing movement to the size of the object (i.e., scaling of the aperture between hands). We examined 5-, 7-, and 9-month-olds' reaching behaviors to gain more insight into the developmental changes occurring in the visual guidance of action mode selection and movement control, and the relationship between these processes. Infants were presented with a small set of objects (i.e., 2, 3, 7, and 8 cm) and a large set of objects (i.e., 6, 9, 12, and 15 cm). For the first set of objects, it was found that the infants more often performed two-handed reaches for the larger objects based on visual information alone (i.e., before making contact with the object), thus showing adaptive action mode selection relative to object size. Kinematic analyses of the two-handed reaches for the second set of objects revealed that inter-trial variance in aperture between the hands decreased as the hands approached the object, indicating that infants' reaching is constrained by the object. Subsequent analysis showed that between-hand aperture scaled to object size, indicating that visual control of the movement is adjusted to object size in infants as young as 5 months. Individual analyses indicated that the two processes were not dependent and followed distinct developmental trajectories. That is, adaptive selection of an action mode was not a prerequisite for appropriate aperture scaling, and vice versa. These findings are consistent with the idea of two separate and independent visual systems (Milner and Goodale in Neuropsychologia 46:774–785, 2008) during early infancy.

    Learning from video modeling examples: Content kept equal, adults are more effective models than peers

    Learning from (video) modeling examples in which a model demonstrates how to perform a task is an effective instructional strategy. The model-observer similarity (MOS) hypothesis postulates that (perceived) similarity between learners and the model in terms of age or expertise moderates the effectiveness of modeling examples. Findings have been mixed, however, possibly because manipulations of MOS were often associated with differences in example content and manipulations of (perceived) expertise confounded with age. Therefore, we investigated whether similarity with the model in terms of age and putative expertise would affect cognitive and motivational aspects of learning when the example content is kept equal across conditions. Adolescents (N = 157) watched a short video in which a peer or adult model was introduced as having low or high expertise, followed by two video modeling examples in which the model demonstrated how to troubleshoot electrical circuit problems. Results showed no effects of putative expertise. In contrast to the MOS hypothesis, adult models were more effective and efficient to learn from than peer models.

    What am I looking at? Interpreting dynamic and static gaze displays

    Displays of eye movements may convey information about cognitive processes but require interpretation. We investigated whether participants were able to interpret displays of their own or others' eye movements. In Experiments 1 and 2, participants observed an image under three different viewing instructions. Then they were shown static or dynamic gaze displays and had to judge whether it was their own or someone else's eye movements and what instruction was reflected. Participants were capable of recognizing the instruction reflected in their own and someone else's gaze display. Instruction recognition was better for dynamic displays, and only this condition yielded above-chance performance in recognizing the display as one's own or another person's (Experiments 1 and 2). Experiment 3 revealed that order information in the gaze displays facilitated instruction recognition when transitions between fixated regions distinguish one viewing instruction from another. Implications of these findings are discussed.

    Inferring task performance and confidence from displays of eye movements

    Eye movements reveal what is at the center of people's attention, which is assumed to coincide with what they are thinking about. Eye-movement displays (visualizations of a person's fixations superimposed onto the stimulus, for example, as dots or circles) might provide useful information for diagnosing that person's performance. However, making inferences about a person's task performance based on eye-movement displays requires substantial interpretation. Using graph-comprehension tasks, we investigated to what extent observers (N = 46) could make accurate inferences about a performer's multiple-choice task performance (i.e., chosen answer), confidence, and competence from displays of that person's eye movements. Observers' accuracy when judging which answer the performer chose was above chance level and was higher for displays reflecting confident performance. Observers were also able to infer performers' confidence from the eye-movement displays; moreover, their own task performance and perceived similarity with the performer affected their judgments of the other's competence.

    Catching moving objects: Differential effects of background motion on action mode selection and movement control in 6- to 10-month-old infants

    In human adults, the use of visual information for selecting appropriate modes for action appears to be separate from the use of visual information for the control of movements of which the action is composed (Milner & Goodale, [1995] The visual brain in action; [2008] Neuropsychologia 46:774-785). More specifically, action mode selection primarily relies upon allocentric information, whereas movement control mainly exploits egocentric information. In the present study, we investigated to what degree this division is already present in 6- to 10-month-old infants when reaching for moving objects; that is, whether allocentric information is uniquely exploited for action mode selection (i.e., reaching with one or the other hand) or whether it is also used for movement control (i.e., reaching kinematics). Infants were presented with laterally approaching objects at two speeds (i.e., 20 and 40 cm/s) against a stationary or moving background. Background motion affects allocentric information about the object's velocity relative to its background. Results indicated that object speed constrained both infants' action mode selection and movement control. Importantly, however, the influence of background motion was limited to action mode selection and did not extend to movement control. The findings provide further support for the contention that during early development, information usage is, at least to some degree, separated for action mode selection and movement control.